3 research outputs found

    Developing Efficient and Effective Intrusion Detection System using Evolutionary Computation

    The internet and computer networks have become essential tools in distributed computing organisations, especially because they enable collaboration between components of heterogeneous systems. The efficiency and flexibility of online services have attracted many applications, but as these services have grown in popularity, so have the number of attacks on them. Security teams must therefore deal with numerous threats in a landscape that is continuously evolving. Traditional security solutions are by no means enough to create a secure environment, so intrusion detection systems (IDSs), which observe system activity and detect intrusions, are usually deployed to complement other defence techniques. However, threats are becoming more sophisticated, with attackers using new attack methods or modifying existing ones. Furthermore, building an effective and efficient IDS is a challenging research problem, both because of the resource restrictions of the deployed environment and because of its constant evolution. To mitigate these problems, we propose to use machine learning techniques to assist with the effort of building IDSs. In this thesis, Evolutionary Computation (EC) algorithms are empirically investigated for synthesising intrusion detection programs: EC can automatically construct programs that raise intrusion alerts. One proposed approach, Cartesian Genetic Programming, has proved particularly effective. We also used an ensemble-learning paradigm, in which EC algorithms serve as a meta-learning method to produce detectors; the latter approach is more fully worked out than the former and has proved a significant success. An efficient IDS should always take into account the resource restrictions of the systems on which it is deployed; memory usage and processing speed are critical requirements. We apply a multi-objective approach to find trade-offs between the intrusion detection capability and the resource consumption of programs, and to optimise these objectives simultaneously.
    High complexity and the large size of detectors are identified as general issues with current approaches. The multi-objective approach is used to evolve Pareto fronts of detectors that aim to maintain the simplicity of the generated patterns. We also investigate the potential application of these algorithms to detecting unknown attacks.
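    The multi-objective idea in this abstract can be illustrated with a minimal sketch of Pareto dominance over two objectives (detection rate maximised, program size minimised). The objectives and candidate values here are illustrative assumptions, not the thesis's exact formulation:

    ```python
    # Hypothetical sketch: detector A dominates B if it is no worse on every
    # objective and strictly better on at least one. The Pareto front keeps
    # only non-dominated candidates, i.e. the trade-off curve between
    # detection capability and resource consumption.

    def dominates(a, b):
        """a, b: (detection_rate, program_size). Higher rate, smaller size preferred."""
        no_worse = a[0] >= b[0] and a[1] <= b[1]
        strictly_better = a[0] > b[0] or a[1] < b[1]
        return no_worse and strictly_better

    def pareto_front(candidates):
        """Keep candidates that no other candidate dominates."""
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates if o is not c)]

    detectors = [(0.95, 120), (0.90, 40), (0.95, 80), (0.80, 200)]
    front = pareto_front(detectors)
    # (0.95, 120) is dominated by the equally accurate but smaller (0.95, 80),
    # and (0.80, 200) is dominated outright, so neither reaches the front.
    ```

    A multi-objective EC algorithm such as NSGA-II applies this test generation after generation, so the surviving detectors approximate the whole trade-off curve rather than a single compromise.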

    Grammatical Evolution for Detecting Cyberattacks in Internet of Things Environments

    The Internet of Things (IoT) is revolutionising nearly every aspect of modern life, playing an ever greater role in both industrial and domestic sectors. The increasing frequency of cyber-incidents is a consequence of the pervasiveness of IoT. Threats are becoming more sophisticated, with attackers using new attacks or modifying existing ones. Security teams must deal with a diverse and complex threat landscape that is constantly evolving. Traditional security solutions cannot protect such systems adequately, and so researchers have begun to use machine learning algorithms to discover effective defence systems. In this paper, we investigate how one approach from the domain of evolutionary computation, grammatical evolution, can be used to identify cyberattacks in IoT environments. The experiments were conducted on up-to-date datasets and compared with state-of-the-art algorithms. The potential application of evolutionary computation-based approaches to detecting unknown attacks is also examined and discussed.
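    The distinguishing feature of grammatical evolution is its genotype-to-phenotype mapping: a linear genome of integer codons is read left to right, and each codon, taken modulo the number of available productions, picks a rule from a BNF grammar. The toy grammar and feature names below are illustrative assumptions, not the paper's actual rule set:

    ```python
    # Hypothetical sketch of the grammatical-evolution mapping. A codon c
    # choosing among n productions selects rule c % n; if the genome runs
    # out, reading wraps around, and a budget bounds runaway expansions.

    GRAMMAR = {
        "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
        "<op>":   [["+"], ["*"]],
        "<var>":  [["pkt_rate"], ["payload_len"]],
    }

    def map_genome(genome, start="<expr>", max_wraps=2):
        """Expand the start symbol left-to-right, using codons to pick rules."""
        symbols, out, i = [start], [], 0
        budget = len(genome) * (max_wraps + 1)
        while symbols and budget:
            s = symbols.pop(0)
            if s in GRAMMAR:
                rules = GRAMMAR[s]
                choice = rules[genome[i % len(genome)] % len(rules)]
                i += 1
                symbols = list(choice) + symbols   # expand leftmost nonterminal
            else:
                out.append(s)                      # terminal: emit it
            budget -= 1
        return " ".join(out)

    # The genome [0, 1, 1, 1, 1] maps to the expression "payload_len * pkt_rate".
    ```

    Because variation operators act on the integer genome while the grammar guarantees syntactic validity, every individual decodes to a well-formed detection expression.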

    Bayesian Adaptive Path Allocation Techniques for Intra-Datacenter Workloads

    Data center networks (DCNs) are the backbone of many cloud and Internet services. They are vulnerable to link failures, which occur frequently, on a daily basis. Service disruption due to link failure may incur financial losses, compliance breaches, and reputation damage, and performance metrics such as packet loss and routing flaps are negatively affected by these failure events. We propose a new Bayesian learning approach to adaptive path allocation that aims to improve DCN performance by reducing both packet loss and routing-flap ratios. The proposed approach incorporates historical information about link failure and usage probabilities into its allocation procedure, and updates this information on the fly during DCN operation. We evaluate the proposed framework using an experimental platform built with the POX controller and the Mininet emulator. Compared with a benchmark shortest-path algorithm, the results show that the proposed methods perform better in terms of reducing packet loss and routing flaps.
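    The Bayesian idea described here can be sketched as maintaining a Beta posterior over each link's failure probability, updating it on observed up/down events, and routing over the path whose links are jointly most likely to stay up. The class, link names, and uniform prior below are illustrative assumptions, not the paper's actual model or parameters:

    ```python
    # Hypothetical sketch: a Beta(alpha, beta) belief per link, conjugate to
    # Bernoulli failure observations, so each event is a one-line update.

    class LinkBelief:
        def __init__(self, alpha=1.0, beta=1.0):   # Beta(1,1) = uniform prior
            self.alpha, self.beta = alpha, beta

        def observe(self, failed):
            """Bayesian update on one observed link event."""
            if failed:
                self.alpha += 1
            else:
                self.beta += 1

        def p_fail(self):
            """Posterior mean failure probability."""
            return self.alpha / (self.alpha + self.beta)

    def best_path(paths, beliefs):
        """Pick the path maximising the probability that all its links stay up."""
        def p_up(path):
            p = 1.0
            for link in path:
                p *= 1.0 - beliefs[link].p_fail()
            return p
        return max(paths, key=p_up)

    beliefs = {"a": LinkBelief(), "b": LinkBelief(), "c": LinkBelief()}
    beliefs["a"].observe(failed=True)              # link "a" just failed once
    path = best_path([["a", "c"], ["b", "c"]], beliefs)
    # The allocator now prefers ["b", "c"], steering traffic away from "a".
    ```

    Updating beliefs on the fly in this way lets the controller shift traffic away from failure-prone links before a hard failure forces a routing flap.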